We study the task of training regression models with the guarantee of label differential privacy (DP). Based on a global prior distribution on label values, which could be obtained privately, we derive a label DP randomization mechanism that is optimal under a given regression loss function. We prove that the optimal mechanism takes the form of a "randomized response on bins", and propose an efficient algorithm for finding the optimal bin values. We carry out a thorough experimental evaluation on several datasets demonstrating the efficacy of our algorithm.
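To make the shape of such a mechanism concrete, the following Python sketch implements a generic "randomized response on bins" under squared-error loss; the bin values, the loss used to assign a label to a bin, and the epsilon value are placeholder assumptions, since the paper's algorithm chooses the bins optimally from the (privately obtained) prior and the given regression loss.

import numpy as np

def rr_on_bins(y, bins, epsilon, rng=None):
    # Map the true label to the bin value minimizing the squared-error loss,
    # then apply k-ary randomized response over the bin values.
    rng = rng or np.random.default_rng()
    bins = np.asarray(bins, dtype=float)
    k = len(bins)
    best = int(np.argmin((bins - y) ** 2))                # bin closest to the true label
    p_best = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    p_other = 1.0 / (np.exp(epsilon) + k - 1)
    probs = np.full(k, p_other)
    probs[best] = p_best                                  # ratio p_best / p_other = e^epsilon
    return float(rng.choice(bins, p=probs))

# Example: privatize a label in [0, 1] with four placeholder bin values.
print(rr_on_bins(0.37, bins=[0.1, 0.35, 0.6, 0.85], epsilon=1.0))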
We introduce a new algorithm for the numerical composition of privacy random variables, which can be used to compute accurate differential privacy parameters for the composition of mechanisms. Our algorithm achieves $\mathrm{polylog}(k)$ running time and memory usage for the task of self-composing, $k$ times, a mechanism from a broad class; this class includes the sub-sampled Gaussian mechanism that arises in the analysis of differentially private stochastic gradient descent. By comparison, recent work of Gopi et al. (NeurIPS 2021) obtained a running time of $\widetilde{O}(\sqrt{k})$ for the same task. Our approach extends to the case of composing $k$ different mechanisms from the same class, improving their running time and memory usage from $\widetilde{O}(k^{1.5})$ to $\widetilde{O}(k)$.
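For context, a naive baseline for this task represents the privacy loss distribution (PLD) on a uniform grid and self-composes it by repeated convolution; the Python sketch below (assuming a uniform grid of spacing step) illustrates that baseline, not the paper's $\mathrm{polylog}(k)$ algorithm, whose point is precisely to avoid materializing the full composed support.

import numpy as np

def self_compose_pld(probs, grid_min, step, k):
    # Self-compose a discretized PLD k times: composition adds independent
    # privacy losses, so the composed PLD is the k-fold convolution of `probs`.
    # Repeated squaring keeps the number of convolutions at O(log k).
    acc_probs, acc_min = np.array([1.0]), 0.0             # PLD of the empty composition
    base_probs, base_min = np.asarray(probs, dtype=float), float(grid_min)
    while k > 0:
        if k & 1:
            acc_probs = np.convolve(acc_probs, base_probs)
            acc_min += base_min
        k >>= 1
        if k:
            base_probs = np.convolve(base_probs, base_probs)
            base_min *= 2
    grid = acc_min + step * np.arange(len(acc_probs))     # loss values of the composed PLD
    return grid, acc_probs

From the composed PLD, an $(\varepsilon, \delta)$ guarantee can then be read off via the hockey-stick divergence, as sketched after the next abstract.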
Privacy loss distributions (PLDs) provide a tight characterization of the privacy loss of a mechanism in the context of differential privacy (DP). Recent work has shown that PLD-based accounting yields tighter $(\varepsilon, \delta)$-DP guarantees than other known methods. A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD on any specified discrete support. We present a novel approach to this problem. Our approach supports both pessimistic estimation, which overestimates the hockey-stick divergence (i.e., $\delta$) for any value of $\varepsilon$, and optimistic estimation, which underestimates the hockey-stick divergence. Moreover, we show that our pessimistic estimate is the best possible among all pessimistic estimates. Experimental evaluation shows that, compared with previous approaches, our method can work with a much larger discretization interval while keeping a similar error, thereby providing a closer approximation than existing methods.
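The quantity being over- or under-estimated can be stated concretely: for a PLD supported on discrete loss values, the $\delta$ implied at a given $\varepsilon$ is the hockey-stick divergence $\mathbb{E}_{L}[\max(0, 1 - e^{\varepsilon - L})]$. A minimal Python sketch (ignoring any probability mass the PLD may place at $+\infty$, which would simply be added to $\delta$):

import numpy as np

def delta_from_pld(losses, probs, epsilon):
    # delta(epsilon) = E_{L ~ PLD}[ max(0, 1 - exp(epsilon - L)) ]
    losses = np.asarray(losses, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs * np.maximum(0.0, 1.0 - np.exp(epsilon - losses))))

Since this expression is non-decreasing in each loss value, rounding losses up onto the chosen discrete support gives a pessimistic (over-)estimate of $\delta$ and rounding down an optimistic one; the method described above is designed to give tighter estimates than such naive rounding.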
Recent research on noise contrastive estimation suggests, both empirically and theoretically, that while having more "negative samples" in the contrastive loss initially improves downstream classification performance, beyond a threshold it hurts downstream performance due to a "collision-coverage" trade-off. But is such a phenomenon inherent to contrastive learning? We show, in a simple theoretical setting where positive pairs are generated by sampling from the underlying latent class (as introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework for noise contrastive estimation. We also provide empirical support for our theoretical results on the CIFAR-10 and CIFAR-100 datasets.
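For reference, an empirical contrastive loss with a variable number of negative samples has the familiar softmax (InfoNCE-style) form; the PyTorch sketch below is a generic version of such a loss, in which the dot-product similarity and temperature are illustrative choices rather than the exact objective of the theoretical setting above.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=1.0):
    # anchor, positive: (batch, dim); negatives: (batch, n_neg, dim).
    # The positive pair competes against n_neg negatives; the question above is
    # how downstream performance behaves as n_neg grows for the loss minimizer.
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True) / temperature   # (batch, 1)
    neg_logits = torch.einsum('bd,bnd->bn', anchor, negatives) / temperature  # (batch, n_neg)
    logits = torch.cat([pos_logit, neg_logits], dim=1)                        # positive at index 0
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)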
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the context of machine unlearning and other settings when model retraining may be induced. An adversary first adds a few carefully crafted points to the training dataset such that the impact on the model's predictions is minimal. The adversary subsequently triggers a request to remove a subset of the introduced points at which point the attack is unleashed and the model's predictions are negatively affected. In particular, we consider clean-label targeted attacks (in which the goal is to cause the model to misclassify a specific test point) on datasets including CIFAR-10, Imagenette, and Imagewoof. This attack is realized by constructing camouflage datapoints that mask the effect of a poisoned dataset.
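The two phases of this threat model can be summarized in a short sketch; train_fn, poison_set, camouflage_set, and the model's predict method are assumed stand-ins for the victim's training routine and the attack-generated point sets, which the paper constructs with a separate procedure.

def camouflaged_poisoning_demo(train_fn, clean_data, poison_set, camouflage_set, target_x):
    # Phase 1: the victim trains on data containing both poison and camouflage
    # points; the camouflage masks the poison, so the prediction on the target
    # point looks benign and the added points are hard to notice.
    model_before = train_fn(clean_data + poison_set + camouflage_set)
    print("before unlearning:", model_before.predict([target_x]))

    # Phase 2: the adversary requests removal of only the camouflage points
    # (e.g. via machine unlearning); retraining without them unleashes the
    # attack and the target point is now misclassified.
    model_after = train_fn(clean_data + poison_set)
    print("after unlearning:", model_after.predict([target_x]))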
The performance of differentially private machine learning can be boosted significantly by leveraging the transfer learning capabilities of non-private models pretrained on large public datasets. We critically review this approach. We primarily question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving. We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy. Beyond the privacy considerations of using public data, we further question the utility of this paradigm. We scrutinize whether existing machine learning benchmarks are appropriate for measuring the ability of pretrained models to generalize to sensitive domains, which may be poorly represented in public Web data. Finally, we notice that pretraining has been especially impactful for the largest available models -- models sufficiently large to prohibit end users running them on their own devices. Thus, deploying such models today could be a net loss for privacy, as it would require (private) data to be outsourced to a more compute-powerful third party. We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics. We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs among sample complexity, accuracy, and privacy for a wide range of fundamental high-dimensional parameter estimation problems, including mean and covariance estimation. We show that this reduction can be implemented in polynomial time in some important special cases. In particular, using nearly-optimal polynomial-time robust estimators for the mean and covariance of high-dimensional Gaussians which are based on the Sum-of-Squares method, we design the first polynomial-time private estimators for these problems with nearly-optimal samples-accuracy-privacy tradeoffs. Our algorithms are also robust to a constant fraction of adversarially-corrupted samples.
Biomedical image segmentation is one of the fastest-growing fields and has seen extensive automation through the use of Artificial Intelligence. This has enabled widespread adoption of accurate techniques that expedite screening and diagnostic processes which would otherwise take several days to finalize. In this paper, we present an end-to-end pipeline to segment lungs from chest X-ray images, training a UNet model on the Japanese Society of Radiological Technology (JSRT) dataset to enable faster initial screening for various lung disorders. The pipeline can be readily used by medical centers, requiring only X-ray images as input; the model performs the preprocessing and produces a segmented image as the final output. We expect this to drastically reduce the manual effort involved and improve accessibility in resource-constrained locations.
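An inference-time sketch of such a pipeline is given below in Python; the trained UNet model, the input size, the per-image normalization, and the threshold are assumptions made for illustration, and the actual preprocessing applied to the JSRT images in the paper may differ.

import numpy as np
import torch
import torch.nn.functional as F

def segment_lungs(model, xray, size=256, threshold=0.5):
    # `model` is assumed to be a trained UNet returning one logit per pixel,
    # and `xray` a single-channel chest X-ray given as a 2-D array.
    x = torch.from_numpy(np.asarray(xray, dtype=np.float32))[None, None]       # (1, 1, H, W)
    x = F.interpolate(x, size=(size, size), mode='bilinear', align_corners=False)
    x = (x - x.mean()) / (x.std() + 1e-8)                                      # per-image normalization
    with torch.no_grad():
        mask = torch.sigmoid(model(x))[0, 0]                                   # (size, size) probabilities
    return (mask > threshold).cpu().numpy().astype(np.uint8)                   # binary lung mask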
Breaking down a document or a conversation into multiple contiguous segments based on its semantic structure is an important and challenging problem in NLP, which can assist many downstream tasks. However, current works on topic segmentation often focus on segmentation of structured texts. In this paper, we comprehensively analyze the generalization capabilities of state-of-the-art topic segmentation models on unstructured texts. We find that: (a) Current strategies of pre-training on a large corpus of structured text such as Wiki-727K do not help in transferability to unstructured texts. (b) Training from scratch with only a relatively small-sized dataset of the target unstructured domain improves the segmentation results by a significant margin.
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.